Invariance and Equivariance


In What Ways Are Deep Neural Networks Invariant and How Should We Measure This?

Neural Information Processing Systems

It is often said that a deep learning model is "invariant" to some specific type of transformation. However, what is meant by this statement strongly depends on the context in which it is made. In this paper we explore the nature of invariance and equivariance of deep learning models with the goal of better understanding the ways in which they actually capture these concepts on a formal level. We introduce a family of invariance and equivariance metrics that allows us to quantify these properties in a way that disentangles them from other metrics such as loss or accuracy. We use our metrics to better understand the two most popular methods used to build invariance into networks: data augmentation and equivariant layers. We draw a range of conclusions about invariance and equivariance in deep learning models, ranging from whether initializing a model with pretrained weights has an effect on a trained model's invariance, to the extent to which invariance learned via training can generalize to out-of-distribution data.
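
For reference, the standard formal definitions behind these terms (our paraphrase, not a quotation from the paper): a model f is invariant to a transformation g when f(g·x) = f(x) for every input x, and equivariant when f(g·x) = g′·f(x), i.e. transforming the input corresponds to a matching transformation g′ acting on the output. Invariance is the special case where g′ is the identity, and metrics like the ones described here can be read as graded relaxations of these all-or-nothing conditions.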



bea5955b308361a1b07bc55042e25e54-AuthorFeedback.pdf

Neural Information Processing Systems

We would like to thank all reviewers for their valuable feedback, which has helped us improve the paper! Upon acceptance, we will release the code for the model and for the semi-synthetic data generation. Metrics are reported as Mean ± Std. We will add discussion about this in the conclusion. Evaluation on semi-synthetic data is standard for causal inference methods.



In What Ways Are Deep Neural Networks Invariant and How Should We Measure This?

Kvinge, Henry, Emerson, Tegan H., Jorgenson, Grayson, Vasquez, Scott, Doster, Timothy, Lew, Jesse D.

arXiv.org Artificial Intelligence

It is often said that a deep learning model is "invariant" to some specific type of transformation. However, what is meant by this statement strongly depends on the context in which it is made. In this paper we explore the nature of invariance and equivariance of deep learning models with the goal of better understanding the ways in which they actually capture these concepts on a formal level. We introduce a family of invariance and equivariance metrics that allows us to quantify these properties in a way that disentangles them from other metrics such as loss or accuracy. We use our metrics to better understand the two most popular methods used to build invariance into networks: data augmentation and equivariant layers. We draw a range of conclusions about invariance and equivariance in deep learning models, ranging from whether initializing a model with pretrained weights has an effect on a trained model's invariance, to the extent to which invariance learned via training can generalize to out-of-distribution data.
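
To make the measurement idea concrete, here is a minimal Python sketch of one plausible way to score invariance as output drift under a transformation; the function invariance_score, its normalization, and the toy mean-pooling model are illustrative assumptions, not the metric family defined in the paper.

    import numpy as np

    def invariance_score(f, transform, xs, eps=1e-8):
        # Rough invariance score: 1.0 means f's outputs do not move at all
        # under the transformation; values near 0.0 mean the drift is
        # comparable to the typical output norm.
        drift, scale = 0.0, 0.0
        for x in xs:
            y, y_t = f(x), f(transform(x))
            drift += np.linalg.norm(y - y_t)   # how far the output moved
            scale += np.linalg.norm(y) + eps   # normalize by output scale
        return max(0.0, 1.0 - drift / scale)

    # Mean pooling is (up to float rounding) invariant to input permutations.
    f = lambda x: np.array([x.mean()])
    rng = np.random.default_rng(0)
    xs = [rng.normal(size=16) for _ in range(100)]
    print(invariance_score(f, rng.permutation, xs))  # ~1.0

Because a score like this only compares representations before and after the transformation, it is independent of loss or accuracy, which is the kind of disentanglement the abstract describes.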


The importance of invariance in AI 🤖

#artificialintelligence

Compared to computers, humans and most other vertebrates (and even some invertebrates) can learn internal representations of things, such as objects or concepts, unbelievably fast. Instead of requiring millions of labeled data points, a toddler will understand the concept of a chair with only a handful of examples. How? Do most organisms have a large set of hard-coded procedures encoded in their neural circuitry that were created and accumulated over time through evolutionary forces? Considering the evidence, this seems very unlikely. We know that organisms do have some hard-coded memories that influence their behaviors and actions, but the number of such procedures is limited.